Method and system for recording and evaluating images in several focal planes.
Patent abstract:
The invention relates to a method and system for correcting magnification in image measurement determinations, e.g. to detect creep damage when inspecting components. The method is performed using a computing device that includes one or more processors (118) connected to a user interface (120) and one or more storage devices (122). The method includes capturing a plurality of images of a target object (112). Each image is taken at a different distance (113) from the target object (112). The method also includes determining a distance (113) between a lens (106) used to capture the plurality of images and the target object (112) and determining a magnification of each captured image. The method further comprises determining a magnification correction with respect to a reference object, determining a change in a size of the target object (112), and outputting the determined change in a size of the target object (112).
Publication number: CH710375A2
Application number: CH01590/15
Application date: 2015-11-02
Publication date: 2016-05-13
Inventors: Kevin George Harding; Thomas James Batzinger
Applicant: Gen Electric
IPC main class:
Patent description:
STATE OF THE ART
The field of the disclosure relates generally to component inspection systems, and more specifically to an imaging system that captures multiple images of a target object to generate a magnification correction that is used to determine dimensions of the target object. In at least some known systems for detecting creep damage, a test rig set to tight tolerances is used to ensure repeatability of the creep measurement over time. The set-up time and man-hours are considerable because many components typically have to be inspected. In some cases, even slight mispositioning of a portable camera, or a curved surface, can make it impossible to achieve a fixed magnification for a given sensor. Other methods of correcting for magnification in an image used for accuracy measurements include placing reference targets a known distance apart in the image, positioned on the part so that they do not change as the part changes, for example due to creep. Such reference targets take up extra space on the part and require a larger image. An alternative approach has been to use a precision mounting system to position the sensor very precisely each time.
SHORT DESCRIPTION
In one embodiment, an inspection imaging system using magnification correction from multiple focal planes includes an image recording device that comprises an image capture device and a lens. The system also includes a controller comprising a user interface, one or more storage devices, and one or more processors communicatively connected to the user interface and the one or more storage devices. The processor is programmed to capture a plurality of images of a target object, each image being captured at a different distance from the target object, to determine a distance between a lens used to capture the plurality of images and the target object, and to determine a magnification of each captured image.
The processor is also programmed to determine a magnification correction with respect to a reference, to determine a change in a size of the target object, and to output the determined change in the size of the target object. In any embodiment of the system, it may be advantageous if the system further comprises an optical path changer configured to change the optical path of light between the target object and the lens. In any embodiment of the system, it may be advantageous if the processor is further programmed to receive point spread function information for the lens to determine the distance between the lens and the target object. In any embodiment of the system, it may be advantageous if the processor is further programmed to determine a magnification of each captured image using focal length information of the lens. In any embodiment of the system, it may be advantageous if the processor is further programmed to determine the distance between the lens and the target object using a focus clarity of the image. In any embodiment of the system, it may be advantageous if the processor is further programmed to determine the distance between the lens and the target object using at least one of a depth-from-focus function and a depth-from-defocus function. In another embodiment, a method for correcting a magnification in image measurement determinations comprises capturing a plurality of images of a target object, each image being captured at a different distance from the target object, determining a distance between a lens used to capture the plurality of images and the target object, and determining a magnification of each captured image. The method also includes determining a magnification correction with respect to a reference, determining a change in a size of the target object, and outputting the determined change in the size of the target object.
In another embodiment of the method, it can be advantageous if determining a distance between the lens used to capture the plurality of images and the target object comprises receiving point spread function information for the lens. In one embodiment of the method, it can be advantageous if determining a magnification of each captured image comprises determining a magnification of each captured image using focal length information of the lens. In any embodiment of the method, it can be advantageous if capturing the plurality of images of the target object comprises capturing a plurality of images of a pattern that is bonded to a surface of a component. In any embodiment of the method, it can be advantageous if capturing the plurality of images of the target object comprises capturing a plurality of images of a surface feature of a component. In any embodiment of the method, it can be advantageous if determining a distance between the lens used to capture the plurality of images and the target object comprises determining the distance between the lens and the target object using a focus clarity of the image. In any embodiment of the method, it can be advantageous if determining a distance between the lens used to capture the plurality of images and the target object comprises determining the distance between the lens and the target object using at least one of a depth-from-focus function and a depth-from-defocus function. In any embodiment of the method, it can be advantageous if determining a distance between the lens used to capture the plurality of images and the target object comprises capturing a first of the plurality of images in focus.
In any embodiment of the method, it can be advantageous if determining a distance between the lens used to capture the plurality of images and the target object comprises capturing the remainder of the plurality of images out of focus. In yet another embodiment, one or more non-transitory computer-readable storage media include computer-executable instructions embodied thereon. When executed by at least one processor, the computer-executable instructions cause the processor to capture a plurality of images of a target object, each image being captured at a different distance from the target object, to determine a distance between the lens used to capture the plurality of images and the target object, and to determine a magnification of each captured image. The computer-executable instructions also cause the processor to determine a magnification correction with respect to a reference, determine a change in a size of the target object, and output the determined change in the size of the target object. In any embodiment of the computer-readable storage medium, it may be advantageous if the computer-executable instructions further cause the at least one processor to determine the distance between the lens and the target object using a focus clarity of the image. In any embodiment of the computer-readable storage medium, it can be advantageous if the computer-executable instructions further cause the at least one processor to determine the distance between the lens and the target object using at least one of a depth-from-focus function and a depth-from-defocus function. In any embodiment of the computer-readable storage medium, it may be advantageous if the computer-executable instructions further cause the at least one processor to capture a first of the plurality of images in focus.
In any embodiment of the computer-readable storage medium, it may be advantageous if the computer-executable instructions further cause the at least one processor to capture the remainder of the plurality of images out of focus.
DRAWINGS
These and other features, embodiments, and advantages of the present disclosure will be better understood from reading the following detailed description with reference to the accompanying drawings, in which like characters represent like parts throughout the drawings, wherein: FIG. 1 is a schematic representation of an exemplary inspection imaging system employing magnification correction in accordance with an exemplary embodiment of the present disclosure; FIG. 2 is a flow chart of an exemplary method for correcting magnification of images of a target object using multiple focal planes; FIG. 3 is an example of changes in the focal plane for a focus-based determination based on three images of a component captured at different depths of focus using the method shown in FIG. 2; and FIG. 4 is a perspective view of the component 301 shown in FIG. 3. Unless otherwise indicated, the drawings provided herein are intended to illustrate features of embodiments of this disclosure. These features are believed to be applicable in a wide variety of systems comprising one or more embodiments of this disclosure. As such, the drawings are not intended to include all of the conventional features known to those of ordinary skill in the art to be required to practice the embodiments disclosed herein.
DETAILED DESCRIPTION
In the following description and claims, reference is made to a number of terms which are defined to have the following meanings. The singular forms "a", "an", and "the" include plural references unless the context clearly dictates otherwise.
"Optional" means that the event or circumstance described subsequently may or may not occur, and that the description includes instances where the event occurs and instances where it does not. Approximating language, as used throughout the specification and claims, may be applied to modify any quantitative representation that could permissibly vary without changing the basic function to which it relates. Accordingly, a value modified by a term or terms such as "about", "approximately", and "substantially" is not limited to the precise value stated. In at least some instances, the approximating language may correspond to the precision of an instrument used to measure the value. Here and throughout the specification and claims, range limitations may be combined and/or interchanged; such ranges are identified and include all the sub-ranges contained therein unless context or language indicates otherwise. As used herein, the terms "processor" and "computer" and related terms, e.g. "processing device" and "computing device", are not limited to just those integrated circuits referred to in the art as a computer, but broadly refer to a microcontroller, a microcomputer, a programmable logic controller (PLC), an application-specific integrated circuit, and other programmable circuits, and these terms are used interchangeably herein. In the embodiments described herein, memory includes, but is not limited to, a computer-readable medium, such as random access memory (RAM), and a computer-readable non-volatile medium, such as flash memory. Alternatively, a floppy disk, a compact disc read-only memory (CD-ROM), a magneto-optical disk (MOD), and/or a digital versatile disc (DVD) may also be used. In the embodiments described herein, additional input channels may be, but are not limited to, computer peripherals associated with a user interface, such as a mouse and a keyboard.
Alternatively, other computer peripherals may also be used, including, for example, but not limited to, a scanner. Furthermore, in the exemplary embodiment, additional output channels may include, but not be limited to, a user interface monitor. Further, as used herein, the terms "software" and "firmware" are interchangeable and include any computer program stored in memory for execution by personal computers, workstations, clients, and servers. As used herein, the term "non-transitory computer-readable medium" is intended to be representative of any tangible computer-based device implemented in any method or technology for the short-term and long-term storage of information, such as computer-readable instructions, data structures, program modules and sub-modules, or other data, in any device. Therefore, the methods described herein may be encoded as executable instructions embodied in a tangible, non-transitory, computer-readable medium, including, without limitation, a storage device and/or a memory device. Such instructions, when executed by a processor, cause the processor to perform at least a portion of the methods described herein. Moreover, as used herein, the term "non-transitory computer-readable medium" includes all tangible, computer-readable media, including, without limitation, non-transitory computer storage devices, including, without limitation, volatile and non-volatile media, and removable and non-removable media such as firmware, physical and virtual storage, CD-ROMs, DVDs, and any other digital source such as a network or the Internet, as well as yet-to-be-developed digital media, with the sole exception being a transitory, propagating signal. Furthermore, as used herein, the term "real time" refers to at least one of the time of occurrence of the associated events, the time of measurement and collection of predetermined data, the time for processing the data, and the time of a system response to the events and the environment.
In the embodiments described herein, these activities and events occur substantially instantaneously. Depth from focus/defocus is used to estimate the 3-D surface of a scene from a set of two or more images of that scene. The images are obtained by changing the camera parameters (typically the focus setting or the axial position of the image plane) and are taken from the same point of view. The difference between depth from focus and depth from defocus is that in the first case it is possible to dynamically change the camera parameters during the surface estimation process, while in the second case this is not allowed. In addition, both problems are called either active or passive depth from focus/defocus, depending on whether it is possible to project structured light onto the scene. While many computer vision techniques estimate 3-D surfaces using images obtained with pinhole cameras, real-aperture cameras are used here to determine depth from defocus. Real-aperture cameras have a shallow depth of field, which leads to images that appear focused only on a small 3-D slice of the scene. Image formation can be explained by geometrical optics. The lens is modeled using the thin-lens law, i.e. 1/f = (1/v) + (1/u), where f is the focal length, u is the distance between the lens plane and the in-focus plane in the scene, and v is the distance between the lens plane and the image plane. In depth determination from focus, a series of images is recorded, each with a shallow depth of field. In the simplest form of depth determination from focus or defocus, information is recorded from a large number of images, and the set of images is examined for those images that have the least amount of blur, i.e. the greatest focus clarity. In various embodiments of the present disclosure, this approach is used to define regions within each image that are best in focus and to combine these regions to build a single in-focus image.
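The thin-lens relation above can be illustrated with a short numerical sketch; the focal length and object distance below are assumed values for illustration, not figures from the disclosure:

```python
# Minimal sketch of the thin-lens law 1/f = 1/v + 1/u, solving for the
# image distance v given focal length f and object distance u.
# All distances are in millimetres; the numbers are assumptions.

def image_distance(f, u):
    """Return the image distance v satisfying 1/f = 1/v + 1/u."""
    if u <= f:
        raise ValueError("object must lie beyond the focal length")
    return 1.0 / (1.0 / f - 1.0 / u)

def object_distance(f, v):
    """Inverse relation: recover the object distance u from f and v."""
    return 1.0 / (1.0 / f - 1.0 / v)

f_mm = 50.0   # assumed focal length
u_mm = 500.0  # assumed object distance
v_mm = image_distance(f_mm, u_mm)
# Round-tripping v through the inverse relation recovers u.
assert abs(object_distance(f_mm, v_mm) - u_mm) < 1e-6
```

Because u and v are linked through f, knowing any two of the three quantities fixes the third, which is what allows distance to be inferred from focus.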
In depth determination from defocus, fewer images can be recorded and the degree of defocus is modeled. The amount of defocus blur can be used to assess how far a specific image feature is from best focus. In this case, the blur is typically modeled as a convolution of the in-focus image with an effective point spread function whose blur radius can be calculated geometrically from R = (D·s/2)(1/f − 1/o − 1/s), where R represents the blur radius, D represents the diameter of the aperture, f represents the focal length of lens 106, o represents the object distance to component 110, and s represents the image distance to image capture device 102. The information about distinct edges is analyzed with regard to focus clarity. For some surfaces, clear edge information may not be available. When a surface has no inherent features, such as a surface grain structure or other noticeable visible features, a different approach is needed. An alternative to using the inherent features of an object as a target is to project a pattern, such as lines, onto the surface. The frequency content of the blur can then be modeled around a narrow band of the primary frequency (spacing) of the pattern projected onto the surface of the object. This estimation can then be made using a local Gabor-type operator over x and y of the form g(x, y) = exp(−(x′² + y′²)/(2a²)) cos(2πx′/T + ϕ), where x′ = x cos(θ) − y sin(θ), y′ = −x sin(θ) + y cos(θ), T is the primary period of the pattern projected onto the object, a is the standard deviation of the equivalent Gaussian filter, θ is the angle of illumination with respect to the surface normal, and ϕ represents the phase shift. In these approaches, the effect of the blur is considered primarily to widen the projected pattern and to reduce the rate of change in intensity (the derivative of contrast) at the edges. In some cases, such as auto-focus systems, only the contrast of the edges in each region is taken into account.
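The geometric blur-radius relation R = (D·s/2)|1/f − 1/o − 1/s| can be sketched numerically; the aperture diameter, focal length, and distances below are hypothetical values chosen only to show the behavior:

```python
# Sketch of the geometric blur radius of a defocused point,
# R = (D*s/2) * |1/f - 1/o - 1/s|, using the symbols of the text
# (D aperture diameter, f focal length, o object distance, s image
# distance). All numbers below are illustrative assumptions.

def blur_radius(D, f, o, s):
    return (D * s / 2.0) * abs(1.0 / f - 1.0 / o - 1.0 / s)

f, o, D = 50.0, 500.0, 25.0
# When the sensor sits exactly at the in-focus image distance
# s = 1/(1/f - 1/o), the blur radius vanishes.
s_focus = 1.0 / (1.0 / f - 1.0 / o)
assert blur_radius(D, f, o, s_focus) < 1e-9

# Moving the sensor away from s_focus grows the blur, which is what
# makes the degree of blur usable as a distance cue.
assert blur_radius(D, f, o, s_focus + 1.0) > 0.0
```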
Alternatively, the frequency content of the blur is often modeled as a Laplacian operator around a narrow band of the primary frequency (spacing) of the pattern projected onto the part. For an imaging system, depth of focus and system resolution are usually in conflict: the higher the resolution, the shallower the depth over which the imaging system can focus and capture a clear image. Both high resolution and a large imaging depth are desirable for detecting a target object. High resolution is required for imaging target features such as the grain structure and surface scratches on a component being inspected. In one embodiment, as further described below, an imaging system images the target object on the component being inspected using an imaging element, such as a lens, having a fixed optical path length and focal point. An LCP (liquid crystal panel) and birefringent optics are positioned between the imaging element and the target to change the optical path length of the imaging system. A birefringent optical element is an element whose optical path length depends on the orientation of the polarization of the light and, depending on the geometry, can be a birefringent window or a birefringent lens. This results in two or more optical paths of different lengths, which causes refocusing of the resulting target object image. The change in the optical path length through the LCP and the birefringent optical element has the same effect on the focus/defocus of the image as a change in the physical distance between the target object and the imaging system would. Data from the target images of the focused and refocused optical path lengths are used to calculate a distance from the lens to the target. This can be referred to as calculating depth from focus or depth from defocus.
In accordance with one embodiment, as described below, a method is described in which an LCP and a birefringent element are attached to the camera lens and a transmitted electronic signal (voltage) is used to control the polarization rotation caused by the liquid crystal. One or more different voltages are applied, which causes the polarization rotation produced by the LCP to change. This in turn causes the light to take a path of different refractive index within the birefringent element, resulting in a different optical path length. Any variation in the optical path length results in changes in focus/defocus in the images similar to a physical change in the distance between the target object and the image capture device. In either case, the image of the target object is captured using an image capture device consisting of a camera or similar device, which captures images of the target object and generates the captured image data based on time or space parameters. Likewise, the imaging system may include additional components typically found in optical systems, such as, but not limited to, additional lenses, mirrors, light filters, shutters, lighting devices, and electronic components. There are several methods of generating the focus shifts that are used to determine the distance between the target object and the lens using an LCP and a birefringent element. In certain embodiments, two to three focus shifts on a scale between about 2 and about 10 millimeters are used. If the depth of field (DOF) of the target object or surface feature is greater than the focus shift, the generated images will have an overlapping focus with the central "best focus" regions on either side of the overlap area. Depth of field (DOF) is defined as the imaging range through which a feature of a given size does not appear to produce a change in focus. If the DOF is shorter than the path length shift, each image shows a band of clear focus at a different depth on the target.
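The optical path difference introduced by the birefringent element can be sketched as the index difference times the element thickness; the 5 mm thickness and the calcite refractive indices below are illustrative assumptions, not values from the disclosure:

```python
# Sketch: the extra optical path through a birefringent window of
# thickness d is (n_e - n_o) * d, depending on which refractive index
# the LCP-rotated polarization selects. The indices below are the
# approximate ordinary/extraordinary indices of calcite, used only as
# an example material.

def optical_path_shift(d_mm, n_o, n_e):
    """Optical path difference between the two polarization states."""
    return (n_e - n_o) * d_mm

n_o, n_e = 1.658, 1.486  # calcite, approximate values
shift = optical_path_shift(5.0, n_o, n_e)
# A 5 mm calcite window shifts the path by (1.486 - 1.658) * 5 = -0.86 mm,
# on the same few-millimetre scale as the focus shifts described above.
assert abs(shift - (-0.86)) < 1e-9
```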
Using a series of images taken at various distances from the target object, with displacements precisely controlled so that the change in position is known, provides knowledge of how the images change with changes in distance in the specific imaging system, which allows an estimate of the magnification at each point and, as a result, a corrected geometry to be generated for the calculated in-focus position. In various embodiments, three images at three different known spacing distances are used. In this way, even if the imaging device is not positioned repeatably each time, the size of the pattern being viewed can be correctly calculated using a reference magnification of known dimensions. Achieving a desired magnification quickly and with a high degree of accuracy can be very difficult, especially when viewing a feature or target object using a hand-held device. In some cases, a curved surface can make it impossible to achieve a fixed magnification for a given imaging device. This method allows the images to be used to calculate the correct geometry of the target object without elaborate fixturing or other means of manually achieving a reference image magnification. In the case of measuring creep by observing small changes in a two-dimensional (2-D) target with a portable camera device, this method allows a high degree of repeatability of the measurement. Embodiments of the magnification correction systems described herein provide an inexpensive method of measuring a change in target object dimensions to determine an amount of creep a component is experiencing. In the embodiments described herein, an imaging system is used to provide real-time information on creep in a component. In particular, the embodiments described herein describe an image capture device and processing functions for determining a distance to the component and a magnification of the target object in a plurality of images.
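The magnification estimate at each known stand-off distance can be sketched with a thin-lens model, together with a size correction against a reference feature of known dimension; the focal length, distances, and pixel counts below are hypothetical:

```python
# Sketch: thin-lens magnification m = f / (o - f) at each of three
# known stand-off distances, and correction of a measured pixel size
# against a reference feature of known dimension. All numbers are
# assumptions for illustration.

def magnification(f, o):
    return f / (o - f)

def corrected_size(pixels_measured, pixels_reference, reference_mm):
    """Scale a pixel measurement by a reference of known size."""
    return pixels_measured * reference_mm / pixels_reference

f = 50.0
distances = [450.0, 500.0, 550.0]  # three known spacings, mm
mags = [magnification(f, o) for o in distances]
# A closer stand-off gives a larger magnification, which is why the
# distance must be known before sizes can be compared across images.
assert mags[0] > mags[1] > mags[2]

# A 40 px target next to an 80 px reference of 2.0 mm is 1.0 mm wide.
assert corrected_size(40, 80, 2.0) == 1.0
```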
A correction is made for differences in magnification due to changes in distance between images. When the corrections are applied to the target image, the dimensions of the target object can be determined. Therefore, the embodiments described herein significantly reduce the set-up time for measuring creep in components, thereby reducing maintenance costs. In addition, the creep measurement determinations are consistent over time. FIG. 1 is a schematic illustration of an inspection imaging system 100 that includes magnification correction in accordance with an exemplary embodiment of the present disclosure. In the exemplary embodiment, the inspection imaging system 100 includes an image capture device 102 configured to be able to shift its focus position (fp1, fp2, fp3). In various embodiments, the image capture device 102 comprises an image acquisition device 104, a lens 106, and, in some embodiments, an optical path changer 108, for example, but not limited to, a birefringent element or a glass element. The image capture device 102 is capable of changing its focus position using at least one of the optical path changer 108 and a positioning mount (not shown) configured to move at least one of the image capture device 102 and a component 110, which comprises a target object 112, toward or away from one another. Moving the image capture device 102 and the component 110 toward or away from each other changes a distance 113 between them. The distance 113 is used to determine a magnification of the target object 112 in each focus position. The target object 112 is embodied as a pattern bonded to or etched into a surface 114 of the component 110, or as a feature of the surface 114 such as, but not limited to, a hole, recess, slot, protrusion, or combinations thereof that add relief to the surface 114. The inspection imaging system 100 includes a controller 116 that is configured to coordinate the operations of the inspection imaging system 100.
The controller 116 coordinates the capture of images and the positioning of the image capture device 102 and the component 110 with respect to one another. The controller 116 includes one or more processors 118 connected to a user interface 120 and one or more storage devices 122. In operation, the controller 116 fetches instructions from the one or more storage devices 122 which, when executed by the one or more processors 118, instruct the one or more processors 118 to set output parameters for capturing a plurality of images of the target object 112, each image being taken at a different focus position. For example, the controller 116 initiates the capture of a first image at a first focus position 124, a second image at a second focus position 126, and a third image at a third focus position 128. The captured images are transmitted to the controller 116, where they are processed immediately and/or saved for later use. FIG. 2 is a flow diagram of a method 200 for correcting a magnification of images of a target object using multiple focal planes. As used here, a focal plane (FP) is an imaginary two-dimensional plane in front of the camera or the image capture device 102 at the focal point. The FP represents the theoretical plane of sharpest focus and lies within the depth of field. The FP is parallel to the sensor (and perpendicular to the optical axis) of the camera or image capture device 102. Multiple focal planes refer to a plurality of images, each of which is captured at a different distance between the sensor and the target, using the same optical parameters for each image (except for the distance between the sensor and the target). In the exemplary embodiment, method 200 includes capturing 202 a plurality of images of a target object.
Each image of the target object is taken at a different distance from the target object. The method 200 also includes determining 204 a distance between a lens used to capture the plurality of images and the target object. The method 200 further comprises determining 206 a magnification of each captured image, determining 208 a magnification correction with reference to a reference target object having known or calculable dimensions, determining 210 a change in a size of the target object, and outputting 212 the determined change in a size of the target object to a computer system 130. In one embodiment, the determined change in size of the target object is output to a maintenance planning computer system that is communicatively connected to the inspection imaging system. In various embodiments, the determined change in size of the target object is output to a rate computing system that is configured to determine a rate of change of the target object and to predict a period of time before the target object exceeds a predetermined limit. The image capture device 102 (shown in FIG. 1) is used to capture images of the target object 112. The images are usually visual images recorded in the visible wavelength range of light. In various embodiments, other wavelengths, such as the infrared band, are used to capture the images. After each image is captured 202, the optical path distance between the image capture device 102 and the target object 112 (shown in FIG. 1) is changed to capture the next image in a different focal plane. The image capture device 102 is not refocused at every optical path distance; instead, the focus clarity or blur of the edges of the target object 112 is used to determine a distance between the image capture device 102 and the target object 112. In addition, other optical parameters of the image capture device 102 are not adjusted between captures of the images.
In some embodiments, one of the captured images is in the best focus position, with relatively clear, sharp lines. In other embodiments, the images captured at a different distance 113 are not in best focus and have blurred edges. A prediction of the distance between the image capture device 102 and the target object 112 is made based on the degree of blur or the degree of clarity of these edges. One function that describes this change is a point spread function; other focus-based functions may also be used. By using the focus clarity to determine a distance between the image capture device 102 and the target object 112, the need for complicated and time-consuming adjustment of the component 110 in a test bench to restore an original image for comparing the change in size of the target object 112 is avoided. The lens point spread information 214 is used to determine 204 the distance between the lens 106 (shown in FIG. 1), which is used to capture the plurality of images, and the target object 112. Because changing the distance 113 (shown in FIG. 1) from the lens 106 to the target object 112 also changes the magnification of the target object 112 (the target object 112 appears larger in images taken at a closer distance and smaller in images taken at a greater distance), a magnification is determined 206 for each captured image using focal length information 216 for the lens 106. A change in magnification from image to image is determined using the distance 113 and the focal length information 216. A magnification correction with reference to a reference target object is also determined 208. Using the corrected magnification, a change in a size of the target object is determined 210. The comparison of a currently determined target object size with a previously determined target object size is carried out in the controller 116, or the controller 116 delegates such tasks to other components (not shown).
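Once magnification-corrected sizes are available, the comparison of a current size against a previously determined baseline reduces to a simple strain calculation; the baseline and current sizes below are hypothetical:

```python
# Sketch: the change in corrected target size between inspections,
# expressed as a creep strain estimate. Sizes below are hypothetical
# values, not measurements from the disclosure.

def creep_strain(size_now_mm, size_baseline_mm):
    """Fractional change in target size relative to the baseline."""
    return (size_now_mm - size_baseline_mm) / size_baseline_mm

baseline = 10.000  # corrected target size at first inspection, mm
current = 10.012   # corrected target size at latest inspection, mm
strain = creep_strain(current, baseline)
# A 12 micron growth of a 10 mm target is a strain of 0.0012 (0.12 %).
assert abs(strain - 0.0012) < 1e-12
```

A rate computing system as described above could then divide successive strain values by the time between inspections to predict when a limit will be exceeded.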
In various embodiments, the images and/or the results of the determinations are output 212 to a user (not shown) at the user interface 120 or to a maintenance computer system, which may include a maintenance planning computer system or a rate computing system. In various embodiments, the maintenance planning and rate computing functions of the rate computing system can be performed by individual computers, or the functions can be integrated into part of the controller 116. FIG. 3 is an example of changes in the depth of focus for a focus-based determination on the basis of three images 300 of a component 301, which are recorded at different depths of focus. FIG. 4 is a perspective view of the component 301 (shown in FIG. 3). In the exemplary embodiment, a plurality of images captured at different depths of focus or focus positions is used in a method for determining depth from defocus (DFD) using the configuration shown in FIG. 1. DFD is well suited for portable component measurements where edges and features on the component are visible and sufficient to be used as a means of generating image data in the region of such features. DFD has not been widely used in industrial measurements because it relies on clear local features and, as such, does not apply well to smooth, clean surfaces. In depth determination from focus, one way to provide information is to take a set comprising a large number of images and search for an area within each that has the least amount of blur. In one embodiment, this approach is used to define regions within each image that are best in focus, and these regions are then combined to build a single in-focus image or to determine a distance to the image capture device 102. In this embodiment, a simple corner 302 contains a target object, such as lines (which can be the texture on the part or projected lines).
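The search for the least-blurred region can be sketched with a simple focus measure that scores edge sharpness; the one-dimensional edge profiles below are synthetic examples, not data from the disclosure:

```python
# Sketch of depth-from-focus region selection: score each image in a
# stack with a simple focus measure (sum of squared adjacent
# differences, high for sharp edges) and keep the sharpest one.
# The 1-D "images" below are synthetic edge profiles.

def focus_measure(row):
    """Sum of squared adjacent differences: high for sharp edges."""
    return sum((b - a) ** 2 for a, b in zip(row, row[1:]))

def sharpest(stack):
    """Index of the image in the stack with the highest focus measure."""
    return max(range(len(stack)), key=lambda i: focus_measure(stack[i]))

blurred = [0.0, 0.2, 0.5, 0.8, 1.0]  # soft edge
sharp = [0.0, 0.0, 1.0, 1.0, 1.0]    # hard edge, same endpoints
assert focus_measure(sharp) > focus_measure(blurred)
assert sharpest([blurred, sharp]) == 1
```

Applied per region rather than per image, the same scoring selects the best-focused patches that are combined into a single in-focus image.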
The area over which each image is clearly in focus is marked in the graphic representation with thick arrows. In a first image 304, which covers a tip 305 of the corner 302 and extends partially downward along a slope 307, arrows 306 indicate an in-focus part of the image 304. In a second image 308, which covers the tip 305 and extends partially downward along the slope 307, arrows 310 indicate an in-focus part of the image 308. In a third image 312, which covers the tip 305 and extends partially downward along the slope 307, arrows 314 indicate an in-focus part of the image 312. The magnification correction inspection imaging system described above provides a cost-effective method of measuring a change in target dimensions to determine the amount of creep a component experiences. In the embodiments described herein, an imaging system is provided for delivering real-time information regarding creep in a component. In particular, in the embodiments described herein, an image recording device and processing functions are used to determine a distance to the component and a magnification of the target object in a plurality of images. A correction is determined for the differences in magnification due to the changes in distance between images. When the correction is applied to the target image, the dimensions of the target object can be determined. Therefore, the embodiments described herein significantly reduce the set-up time for measuring creep in components, thereby reducing maintenance costs. In addition, the measured creep values are consistent over time.
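The focus-stack search described above — finding the least-blurred image or region in a set captured at different focus positions — is commonly implemented with a focus measure such as the variance of a Laplacian response. Below is a minimal NumPy sketch under that assumption; the function names and the synthetic checkerboard demonstration are illustrative, not taken from the patent:

```python
import numpy as np


def sharpness(img: np.ndarray) -> float:
    """Focus measure: variance of a discrete Laplacian response over
    the image interior. In-focus content keeps high spatial
    frequencies, so sharper images score higher."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())


def best_focus_index(stack) -> int:
    """Index of the image in a focus stack with the highest focus
    measure, i.e. the best-focus position."""
    return int(np.argmax([sharpness(img) for img in stack]))


# Synthetic demo: a sharp checkerboard versus progressively blurred copies.
sharp = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)


def blur(img: np.ndarray) -> np.ndarray:
    # Crude wrap-around box blur as a stand-in for optical defocus.
    return (img
            + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
            + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)) / 5.0


stack = [blur(blur(sharp)), blur(sharp), sharp]
# best_focus_index(stack) selects the sharpest image in the stack.
```

Applying the same measure to sub-regions instead of whole images yields the per-region best-focus map that the embodiment combines into a single in-focus image.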
An exemplary technical effect of the methods, systems, and devices described herein comprises at least one of: (a) using an image capture device to capture multiple images of a target object, each image being captured from a different distance from the target object; (b) using a focus clarity of the target object in the images to determine the distance from the target object; and (c) determining a magnification and a correction to the magnification of the image at each distance. While specific features of various embodiments of the disclosure may be shown in some drawings and not in others, this is for convenience only. In accordance with the principles of the disclosure, any feature of a drawing may be referenced and/or claimed in combination with any feature of any other drawing. [0061] Some embodiments involve the use of one or more electronic or computing devices. Such devices typically include a processor or controller, such as a general purpose central processing unit (CPU), a graphics processing unit (GPU), a microcontroller, a reduced instruction set computer (RISC), an application specific integrated circuit (ASIC), a programmable logic circuit (PLC), and/or any other circuit or processor capable of performing the functions described herein. The methods described herein can be encoded as executable instructions embodied in a computer-readable medium, including, without limitation, a storage device and/or a memory device. Such instructions, when executed by a processor, cause the processor to perform at least some of the methods described herein. The above examples are exemplary only and are not intended to limit the definition and/or meaning of the term processor in any way.
In this written description, examples are used to disclose the embodiments, including the preferred embodiment, and also to enable any person skilled in the art to practice the embodiments, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the disclosure is defined by the claims, and may include other examples that occur to those skilled in the art. Such examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims. A method 200 for correcting magnification in image measurements is carried out using a computing device including one or more processors 118 connected to a user interface 120 and one or more storage devices 122. The method 200 comprises capturing a plurality of images of a target object 112, each image being captured at a different distance 113 from the target object 112. The method 200 also includes determining a distance 113 between a lens 106 used to capture the plurality of images and the target object 112, and determining a magnification of each captured image. The method 200 further comprises determining a magnification correction with respect to a reference object, determining a change in a size of the target object 112, and outputting the determined change in a size of the target object 112.
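The recap above chains the determinations 204 through 210. A minimal sketch of how a per-image magnification could turn pixel measurements into an object-space size change follows; all names, the pixel-pitch parameter, and the sample numbers are illustrative assumptions, not the patent's implementation:

```python
def corrected_size(pixels: float, magnification: float,
                   pixel_pitch_mm: float) -> float:
    """Convert a measured size in pixels to object-space millimetres by
    dividing the on-sensor size (pixels * pitch) by the magnification."""
    return pixels * pixel_pitch_mm / magnification


def size_change(baseline_mm: float, current_mm: float) -> float:
    """Fractional change in target-object size, e.g. a creep strain."""
    return (current_mm - baseline_mm) / baseline_mm


# Example: the same target imaged at two distances, hence two different
# magnifications; after correction the sizes are directly comparable.
baseline = corrected_size(1000.0, 0.100, 0.005)   # 50.0 mm at baseline
current = corrected_size(964.0, 0.096, 0.005)     # slightly larger object
strain = size_change(baseline, current)           # small positive growth
```

A positive `strain` over successive inspections would indicate the target object growing, which is the creep indication the rate computer system trends over time.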
REFERENCE LIST
Magnification correction system 100
Image recording device 102
Image capture device 104
Lens 106
Changer for changing the optical path 108
Component 110
Target object 112
Distance 113
Surface 114
Controller 116
One or more processors 118
User interface 120
One or more storage devices 122
First focal position 124
Second focal position 126
Third focal position 128
Method 200
Capture 202
Determine 204
Determine 206
Determine 208
Determine 210
Output 212
Lens point spread information 214
Focal length information 216
Images 300
Component 301
Corner 302
First image 304
Tip 305
Arrows 306
Slope 307
Second image 308
Arrows 310
Third image 312
Arrows 314
Claims (10)
[1] An inspection imaging system (100) configured to use a magnification correction from a plurality of focal planes (124, 126, 128), the system (100) comprising: an image recording device (102) comprising an image capture device (104) and a lens (106) in optical communication with the image capture device (104); and a controller (116) comprising: a user interface (120); one or more storage devices (122); and one or more processors (118) communicatively connected to the user interface (120) and the one or more storage devices (122), the one or more processors (118) being programmed to: capture (202) a plurality of images of a target object (112), each image (304, 308, 312) of the plurality of images (304, 308, 312) being captured at a different distance (113) from the target object (112); determine (204) a distance (113) between the lens (106) used to capture the plurality of images (304, 308, 312) and the target object (112); determine (206) a magnification of each captured image (304, 308, 312); determine (208) a magnification correction with respect to a reference; determine (210) a change in a size of the target object (112); and output (212) the determined change in a size of the target object (112).
[2] The system (100) according to claim 1, further comprising a changer for changing the optical path (108), which is configured to change the optical path of the light between the target object (112) and the lens (106).
[3] The system (100) according to claim 1 or 2, wherein the processor is further programmed to receive point spread function information for the lens (106) for determining the distance (113) between the lens (106) and the target object (112).
[4] The system (100) according to any one of claims 1 to 3, wherein the processor is further programmed to determine a magnification of each captured image (304, 308, 312) using focal length information of the lens (106).
[5] The system (100) according to any preceding claim, wherein the processor is further programmed to determine the distance (113) between the lens (106) and the target object (112) using a focus clarity of the image.
[6] The system (100) according to any preceding claim, wherein the processor is further programmed to determine the distance (113) between the lens (106) and the target object (112) using a depth-from-focus function and/or a depth-from-defocus function.
[7] A computer-implemented method (200) for correcting a magnification in image measurements, the method (200) being carried out using a computing device (116) that includes one or more processors (118) connected to a user interface (120) and one or more storage devices (122), the method (200) comprising:
capturing (202) a plurality of images (304, 308, 312) of a target object (112), each image of the plurality of images being captured at a different distance (113) from the target object (112);
determining (204) a distance (113) between a lens (106) used to capture the plurality of images and the target object (112);
determining (206) a magnification of each captured image (304, 308, 312);
determining (208) a magnification correction with respect to a reference;
determining (210) a change in a size of the target object (112); and
outputting (212) the determined change in a size of the target object (112).
[8] The method (200) according to claim 7, wherein determining a distance (113) between a lens (106) used to capture a plurality of images (304, 308, 312) and the target object (112) comprises collecting point spread function information for the lens (106).
[9]
The method (200) according to claim 7 or 8, wherein capturing a plurality of images (304, 308, 312) of a target object (112) comprises capturing a plurality of images (304, 308, 312) of a pattern that is connected to a surface (114) of a component (110).
[10] One or more non-transitory computer-readable storage media having computer-executable instructions embodied thereon, wherein, when executed by at least one processor, the computer-executable instructions cause the processor to:
capture (202) a plurality of images (304, 308, 312) of a target object (112), each image of the plurality of images (304, 308, 312) being captured at a different distance (113) from the target object (112);
determine (204) a distance (113) between a lens (106) used to capture the plurality of images and the target object (112);
determine (206) a magnification of each captured image (304, 308, 312);
determine (208) a magnification correction with respect to a reference;
determine (210) a change in a size of the target object (112); and
output (212) the determined change in a size of the target object (112) to a rate computer system configured to determine a rate of change of the target object (112) and to predict a period of time before the target object (112) exceeds a predetermined limit.
Patent family:
Publication number | Publication date
JP6742713B2 | 2020-08-19
DE102015118278A1 | 2016-05-12
US20160134816A1 | 2016-05-12
US9338363B1 | 2016-05-10
JP2016118535A | 2016-06-30
Legal status:
2017-03-15 | NV | New agent | Representative's name: GENERAL ELECTRIC TECHNOLOGY GMBH GLOBAL PATENT, CH
2019-01-15 | AZW | Rejection (application)
Priority:
Application number | Application date | Patent title
US14/534,837 | US9338363B1 | 2014-11-06 | 2014-11-06 | Method and system for magnification correction from multiple focus planes